
    RFID Security: A Method for Tracking Prevention


    PARTS – Privacy-Aware Routing with Transportation Subgraphs

    To ensure privacy for route-planning applications and other location-based services (LBS), the service provider must be prevented from tracking a user's path during navigation at the application level, while the navigation functionality itself is preserved. We introduce the algorithm PARTS, which splits route requests into route parts that are submitted to an LBS in an unlinkable way. Combined with dummy requests and time shifting, our approach achieves stronger privacy. We show that our algorithm protects privacy in the presence of a realistic adversary model while maintaining service quality.
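    A minimal sketch of the splitting-and-scheduling idea described above. Function names such as split_route and schedule_requests are illustrative, not the paper's API, and the actual PARTS algorithm splits along transportation subgraphs rather than fixed-length windows:

```python
import random

def split_route(waypoints, part_len=3):
    """Split a route request into short parts that share only their
    endpoints, so each part can be queried independently."""
    parts = []
    for i in range(0, len(waypoints) - 1, part_len - 1):
        parts.append(waypoints[i:i + part_len])
    return parts

def schedule_requests(parts, dummy_parts, max_shift=5.0):
    """Interleave real and dummy parts and assign random time shifts,
    so submission order and timing do not reveal the full path."""
    scheduled = [(random.uniform(0, max_shift), p) for p in parts + dummy_parts]
    scheduled.sort(key=lambda item: item[0])
    return scheduled

# Hypothetical usage: each scheduled part would be sent to the LBS separately.
route = ["A", "B", "C", "D", "E", "F", "G"]
dummies = [["X", "Y", "Z"]]  # plausible-looking decoy part
for delay, part in schedule_requests(split_route(route), dummies):
    print(f"t+{delay:4.1f}s  request route {part[0]} -> {part[-1]}")
```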

    Simple Proofs of Space-Time and Rational Proofs of Storage

    We introduce a new cryptographic primitive: Proofs of Space-Time (PoSTs), and construct an extremely simple, practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a "space-time" resource (storing data, i.e., space, over a period of time). Formally, we define the PoST resource as a trade-off between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work). Compared to a proof-of-work, a PoST requires less energy use, as the "difficulty" can be increased by extending the time period over which data is stored without increasing computation costs. Our definition is very similar to "Proofs of Space" [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and much simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff, and doesn't require any non-standard assumptions (beyond random oracles). Unlike previous constructions, our protocol allows incremental difficulty adjustment, which can gracefully handle increases in the price of storage compared to CPU work. In addition, we show how, in a cryptocurrency context, the parameters of the scheme can be adjusted using a market-based mechanism, similar in spirit to the difficulty adjustment for PoW protocols.
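    A toy storage-audit loop illustrating the trade-off the abstract describes: the prover can answer challenges either from stored blocks (space over time) or by recomputing them (CPU work). This is only a sketch of the rationale, not the paper's protocol, which additionally handles amortization attacks and incremental difficulty; all names are illustrative:

```python
import hashlib, os, random

def block(seed: bytes, i: int) -> bytes:
    # Pseudorandom block derived from a public seed; recomputable at CPU cost.
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()

class Prover:
    def __init__(self, seed: bytes, n_blocks: int):
        # A rational prover stores all blocks (space over time) instead of
        # recomputing them per challenge (CPU work) -- the PoST trade-off.
        self.store = {i: block(seed, i) for i in range(n_blocks)}

    def respond(self, challenge):
        return [self.store[i] for i in challenge]

def verify(seed: bytes, challenge, response) -> bool:
    # Verifier recomputes the challenged blocks and compares.
    return all(block(seed, i) == r for i, r in zip(challenge, response))

seed, n = os.urandom(16), 1024
prover = Prover(seed, n)
for epoch in range(3):  # repeated audits over time raise the "difficulty"
    challenge = random.sample(range(n), 8)
    assert verify(seed, challenge, prover.respond(challenge))
print("all audits passed")
```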

    Semantically Secure Anonymity: Foundations of Re-encryption

    The notion of universal re-encryption is an established primitive used in the design of many anonymity protocols. It allows anyone to randomize a ciphertext without changing its size, without first decrypting it, and without knowing who the receiver is (i.e., not knowing the public key used to create it). By design it prevents the randomized ciphertext from being correlated with the original ciphertext. We revisit and analyze the security foundation of universal re-encryption and show a subtlety in it, namely, that it does not require that the encryption function achieve key anonymity. Recall that the encryption function is different from the re-encryption function. We demonstrate this subtlety by constructing a cryptosystem that satisfies the established definition of a universal cryptosystem but that has an encryption function that does not achieve key anonymity, thereby instantiating the gap in the definition of security of universal re-encryption. We note that the gap in the definition carries over to a set of applications that rely on universal re-encryption, applications in the original paper on universal re-encryption and also follow-on work. This shows that the original definition needs to be corrected and it shows that it had a knock-on effect that negatively impacted security in later work. We then introduce a new definition that includes the properties that are needed for a re-encryption cryptosystem to achieve key anonymity in both the encryption function and the re-encryption function, building on Goldwasser and Micali's semantic security and the original key anonymity notion of Bellare, Boldyreva, Desai, and Pointcheval. Omitting any of the properties in our definition leads to a problem. We also introduce a new generalization of the Decision Diffie-Hellman (DDH) random self-reduction and use it, in turn, to prove that the original ElGamal-based universal cryptosystem of Golle et al. is secure under our revised security definition. We apply our new DDH reduction technique to give the first proof in the standard model that ElGamal-based incomparable public keys achieve key anonymity under DDH. We present a novel secure Forward-Anonymous Batch Mix as a new application.
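    For concreteness, here is a sketch of the ElGamal-based universal cryptosystem of Golle et al. as it is usually presented: a ciphertext carries an encryption of the message together with an encryption of the identity element, so anyone can re-randomize it without knowing the public key. The group below is a toy one chosen only so the example runs; the paper's point is precisely that this syntax alone does not guarantee key anonymity of the encryption function:

```python
import random

# Toy parameters: safe prime p = 2q + 1 and a generator g of the order-q
# subgroup. Real deployments need a large group; this is illustration only.
p, q, g = 1019, 509, 4

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)                      # secret x, public h = g^x

def encrypt(m, h):
    r0, r1 = random.randrange(1, q), random.randrange(1, q)
    return ((m * pow(h, r0, p) % p, pow(g, r0, p)),   # encryption of m
            (pow(h, r1, p), pow(g, r1, p)))           # encryption of 1

def reencrypt(ct):
    # Needs no public key: randomize using the embedded encryption of 1.
    (a0, b0), (a1, b1) = ct
    s0, s1 = random.randrange(1, q), random.randrange(1, q)
    return ((a0 * pow(a1, s0, p) % p, b0 * pow(b1, s0, p) % p),
            (pow(a1, s1, p), pow(b1, s1, p)))

def decrypt(x, ct):
    (a0, b0), (a1, b1) = ct
    if a1 * pow(b1, q - x, p) % p != 1:   # unit check: ct is under our key
        return None
    return a0 * pow(b0, q - x, p) % p

x, h = keygen()
m = pow(g, 7, p)                          # message encoded in the subgroup
ct = reencrypt(encrypt(m, h))             # anyone can re-randomize blindly
assert decrypt(x, ct) == m
```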

    Strengthening Access Control Encryption

    Access control encryption (ACE) was proposed by Damgård et al. to enable the control of information flow between several parties according to a given policy specifying which parties are, or are not, allowed to communicate. By involving a special party, called the sanitizer, policy-compliant communication is enabled while policy-violating communication is prevented, even if sender and receiver are dishonest. To allow outsourcing of the sanitizer, the secrecy of the message contents and the anonymity of the involved communication partners are guaranteed. This paper shows that in order to be resilient against realistic attacks, the security definition of ACE must be considerably strengthened in several ways. A new, substantially stronger security definition is proposed, and an ACE scheme is constructed which provably satisfies the strong definition under standard assumptions. Three aspects in which the security of ACE is strengthened are as follows. First, CCA security (rather than only CPA security) is guaranteed, which is important since senders can be dishonest in the considered setting. Second, the revealing of an (unsanitized) ciphertext (e.g., by a faulty sanitizer) cannot be exploited to communicate more in a policy-violating manner than the information contained in the ciphertext. We illustrate that this is not only a definitional subtlety by showing how in known ACE schemes, a single leaked unsanitized ciphertext allows for an arbitrary amount of policy-violating communication. Third, it is enforced that parties specified to receive a message according to the policy cannot be excluded from receiving it, even by a dishonest sender.
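    A schematic model of the ACE communication pattern, to make the sanitizer's role concrete. This is not the paper's construction: real ACE enforces the policy through the key material rather than an explicit identity check, and the sanitizer learns nothing about sender, receiver, or message. All names are illustrative:

```python
from dataclasses import dataclass
import secrets

# policy: the set of (sender, receiver) pairs allowed to communicate.
POLICY = {("alice", "bob"), ("bob", "carol")}

@dataclass
class Ciphertext:
    payload: bytes       # encrypted message (the encryption itself is elided)
    randomness: bytes    # randomness that the sanitizer refreshes

def sanitize(ct: Ciphertext) -> Ciphertext:
    # Re-randomization: replacing the randomness destroys any subliminal
    # channel a dishonest sender hid in it. This is the property the paper
    # strengthens, so that even a leaked *unsanitized* ciphertext cannot
    # carry more policy-violating information than its payload.
    return Ciphertext(ct.payload, secrets.token_bytes(16))

def deliver(sender: str, receiver: str, ct: Ciphertext):
    # We model only the *outcome* the keys enforce: a receiver outside the
    # policy simply cannot decrypt what the sanitizer forwards.
    return sanitize(ct) if (sender, receiver) in POLICY else None
```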

    Reducing Time Complexity in RFID Systems

    Radio frequency identification systems based on low-cost computing devices are the new plaything that every company would like to adopt. Their goal can be either to improve productivity or to strengthen security. Specific identification protocols based on symmetric challenge-response have been developed in order to assure the privacy of the device bearers. Although these protocols fit the devices' constraints, they always suffer from a large time complexity. Existing protocols require O(n) cryptographic operations to identify one device among n. Molnar and Wagner suggested a method to reduce this complexity to O(log n). We show that their technique can degrade privacy if the attacker has the ability to tamper with at least one device. Because low-cost devices are not tamper-resistant, such an attack could be feasible. We give a detailed analysis of their protocol and evaluate the threat. Next, we extend an approach based on time-memory trade-offs whose goal is to improve Ohkubo, Suzuki, and Kinoshita's protocol. We show that in practice this approach reaches the same performance as Molnar and Wagner's method, without degrading privacy.
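    A sketch of the tree-based identification idea attributed to Molnar and Wagner: each tag stores the keys on its root-to-leaf path in a key tree, so the reader needs only a couple of trial MACs per level instead of one per tag. The sharing of inner-node keys across tags is exactly what makes tampering with a single tag damaging to the privacy of the others. The shared key store below is a simulation convenience, not part of the protocol:

```python
import hashlib, hmac, os, random

DEPTH = 4                                   # 2**DEPTH tags in the system

node_key = {}                               # one secret key per tree node
def key(prefix):                            # prefix: tuple of bits
    if prefix not in node_key:
        node_key[prefix] = os.urandom(16)
    return node_key[prefix]

def mac(k, nonce):
    return hmac.new(k, nonce, hashlib.sha256).digest()

def tag_respond(tag_bits, nonce):
    # The tag answers with one MAC per tree level, using its path keys.
    return [mac(key(tag_bits[:i + 1]), nonce) for i in range(DEPTH)]

def reader_identify(responses, nonce):
    # The reader walks the tree: 2 trial MACs per level instead of trying
    # all 2**DEPTH tag keys -- O(log n) rather than O(n).
    prefix = ()
    for resp in responses:
        for bit in (0, 1):
            if hmac.compare_digest(mac(key(prefix + (bit,)), nonce), resp):
                prefix += (bit,)
                break
        else:
            return None
    return prefix

tag = tuple(random.randrange(2) for _ in range(DEPTH))
nonce = os.urandom(16)
assert reader_identify(tag_respond(tag, nonce), nonce) == tag
```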

    The re-identification risk of Canadians from longitudinal demographics

    Background: The public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians. Methods: Uniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender, as well as their generalizations, for periods ranging from 1 year to 11 years. Results: Almost 98% of the population was unique on full postal code, date of birth, and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided. Conclusions: A large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three-character postal code, gender, and month/year of birth represent a sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets.
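    The study's risk measure, population uniqueness, is straightforward to compute: count how many records have a quasi-identifier combination shared by no other record. A minimal sketch (field names and records below are hypothetical):

```python
from collections import Counter

def uniqueness(records, quasi_identifiers):
    """Fraction of records whose quasi-identifier combination is unique --
    the re-identification risk measure used in the study."""
    combo = lambda r: tuple(r[q] for q in quasi_identifiers)
    counts = Counter(combo(r) for r in records)
    return sum(1 for r in records if counts[combo(r)] == 1) / len(records)

population = [
    {"postal": "H2X 1Y4", "dob": "1980-03-12", "sex": "F"},
    {"postal": "H2X 1Y4", "dob": "1980-03-12", "sex": "M"},
    {"postal": "H3Z 2B5", "dob": "1975-11-02", "sex": "M"},
]
print(uniqueness(population, ["postal", "dob", "sex"]))  # full detail: 1.0
print(uniqueness(population, ["sex"]))                   # generalized: 0.33
```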

    Topology-Hiding Computation Beyond Semi-Honest Adversaries

    Topology-hiding communication protocols allow a set of parties, connected by an incomplete network with an unknown communication graph in which each party knows only its neighbors, to construct a complete communication network such that the network topology remains hidden even from a powerful adversary who can corrupt parties. This communication network can then be used to perform arbitrary tasks, for example secure multi-party computation, in a topology-hiding manner. Previously proposed protocols could only tolerate passive corruption. This paper proposes protocols that can also tolerate fail-corruption (i.e., the adversary can crash any party at any point in time) and so-called semi-malicious corruption (i.e., the adversary can control a corrupted party's randomness), without leaking more than an arbitrarily small fraction of a bit of information about the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt'18], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary and that semi-malicious corruption is not tolerated. Since leaking a small amount of information is unavoidable, as is the need to abort the protocol in case of failures, our protocols seem to achieve the best possible goal in a model with fail-corruption. Further contributions of the paper are applications of the protocol to obtain secure MPC protocols, which requires a way to bound the aggregated leakage when multiple small-leakage protocols are executed in parallel or sequentially. Moreover, while previous protocols are based on the DDH assumption, a new so-called PKCR public-key encryption scheme based on the LWE assumption is proposed, allowing topology-hiding computation to be based on LWE. Furthermore, a protocol using fully homomorphic encryption that achieves very low round complexity is proposed.
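    The PKCR abstraction mentioned above supports adding and deleting key layers on a ciphertext, which is what lets messages be relayed across the unknown graph without revealing where they originated. A minimal sketch of the layering operations in the ElGamal style used by the earlier DDH-based protocols (the paper's new scheme achieves a similar interface from LWE; the toy group below is for illustration only):

```python
import random

p, q, g = 1019, 509, 4      # toy safe-prime group; illustration only

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def encrypt(m, pk):
    r = random.randrange(1, q)
    return pow(g, r, p), m * pow(pk, r, p) % p

def add_layer(ct, pk, y):
    # Re-keys the ciphertext from pk to the combined key pk * g^y using
    # only the layer secret y -- the PKCR operation a relaying party applies.
    c1, c2 = ct
    return (c1, c2 * pow(c1, y, p) % p), pk * pow(g, y, p) % p

def del_layer(ct, pk, y):
    # Inverse operation: peel the layer off again.
    c1, c2 = ct
    return (c1, c2 * pow(c1, q - y, p) % p), pk * pow(g, q - y, p) % p

def decrypt(ct, x):
    c1, c2 = ct
    return c2 * pow(c1, q - x, p) % p

x, h = keygen()
y = random.randrange(1, q)
m = pow(g, 11, p)
ct = encrypt(m, h)
ct2, h2 = add_layer(ct, h, y)          # now encrypted under h * g^y
ct3, _ = del_layer(ct2, h2, y)
assert decrypt(ct3, x) == m
```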

    Supporting Publication and Subscription Confidentiality in Pub/Sub Networks

    The publish/subscribe model offers a loosely coupled communication paradigm where applications interact indirectly and asynchronously. Publisher applications generate events that are sent to interested applications through a network of brokers. Subscriber applications express their interest by specifying filters that brokers can use for routing the events. Supporting confidentiality of the messages being exchanged is still challenging. First of all, any scheme used for protecting the confidentiality of both the events and the filters should not require the publishers and subscribers to share secret keys; such a restriction is against the loose coupling of the model. Moreover, such a scheme should not restrict the expressiveness of filters and should allow the broker to perform event filtering in order to route the events to the interested parties. Existing solutions do not fully address these issues. In this paper, we provide a novel scheme that supports (i) confidentiality for events and filters; (ii) filters that can express very complex constraints on events, even though brokers are not able to access any information on either events or filters; and (iii) no requirement for publishers and subscribers to share keys.
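    For orientation, here is the plaintext pub/sub matching that such a scheme must preserve: the broker evaluates each subscriber's filter over an incoming event and routes accordingly. In the paper's setting the broker performs this matching over encrypted events and filters, without keys shared between publishers and subscribers; the attribute model and names below are illustrative, not the paper's:

```python
# Subscribers register filters; the broker routes events that match.
subscriptions = {
    "sub1": lambda e: e["topic"] == "stock" and e["price"] > 100,
    "sub2": lambda e: e["topic"] == "news",
}

def broker_route(event):
    # The confidentiality-preserving scheme must let the broker do exactly
    # this evaluation while seeing neither the event nor the filters.
    return [s for s, matches in subscriptions.items() if matches(event)]

print(broker_route({"topic": "stock", "price": 142}))   # -> ['sub1']
```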